This article describes how to use the Kelly criterion to make rational choices when confronted with a risky financial decision, and suggests a way to estimate the most you should be willing to pay for any particular sort of insurance.
The Kelly criterion (which at its core is the idea that the logarithm of your wealth is a better measure of money’s value to you than its absolute value) is well understood by the informed gambling community, and should be more widely known.
If you decide to apply the knowledge in this post, also consult with a financial professional (which as we’ll see later doesn’t include most finance or economics students, and most young financial professionals), and read the disclaimer at the end.
How much is my money worth?
It’s intuitive that the more money you have, the less each additional unit is worth. If you have nothing, a small amount of money is worth a lot – perhaps enabling you to pay your rent and feed your family. If you’re extremely rich, the same amount of money is almost insignificant to you in that you can’t do anything with that money that you couldn’t already do.
The Kelly criterion quantifies this intuition, and suggests that the logarithm of your money is more important than its absolute value. There’s a theoretical basis for using the logarithm that I won’t go into here, and instead take it as given that it makes sense.
A simple gambling application of the Kelly criterion
The simplest application of the Kelly criterion, and perhaps the best known, is betting on the toss of a biased coin.
Let’s say that you have $S$ units of money and a coin that lands heads with probability $p>\frac{1}{2}$. You are given the opportunity to make a series of even-money bets. Specifically, on each bet you can wager from zero to $S$ on the result of the coin-toss, which rationally you should always predict comes up heads. If you guess right, you gain the amount you wagered. If you guess wrong, you lose the amount you wagered.
We can use the Kelly criterion to figure out how much of our stake $S$ to wager. Let’s say we wager $x$. If we win (with probability $p$, assuming we bet on heads) we’ll have $S+x$, and if we lose (with probability $1-p$), we’ll have $S-x$. Kelly suggests maximizing the expected value of the logarithm of our money, so we want to maximize $p\mathrm{log}(S+x) + (1-p)\mathrm{log}(S-x)$.
To maximize, we differentiate with respect to $x$, and set the result to zero. That gives us $\frac{p}{S+x} - \frac{1-p}{S-x} = 0$. That comes out to $x=S(2p-1)$.
So if we’re sure to win ($p=1$), we bet everything. If the coin is unbiased ($p=1/2$) we bet nothing, and we bet a linearly increasing amount in between. For example, when the coin comes up heads three quarters of the time ($p=3/4$) we bet half our current stake, since three quarters is halfway between one half and one. (Another way to say this in gambling jargon is that we bet the fraction of our stake equal to our edge).
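As a sanity check, here’s a minimal Python sketch (the stake of 100 and $p=3/4$ are just illustrative choices of mine) that scans candidate wagers and confirms that the expected log of wealth peaks at the Kelly bet $S(2p-1)$:

```python
from math import log

def expected_log_wealth(x, stake=100.0, p=0.75):
    """Expected log of wealth after one even-money bet of x on heads."""
    return p * log(stake + x) + (1 - p) * log(stake - x)

# Scan candidate wagers in small steps, stopping short of the full stake
# (betting everything risks log(0), which is minus infinity).
wagers = [i * 0.001 for i in range(100_000)]
best = max(wagers, key=expected_log_wealth)
print(f"numerical optimum: {best:.2f}   Kelly formula S(2p - 1): {100.0 * (2 * 0.75 - 1):.2f}")
```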
It’s instructive to compare this to the strategy of always making the wager that maximizes EV. If the coin comes up heads three quarters of the time, the bet with the maximum EV is to bet the entire stake, $S$. After ten throws, we have on average around 57.7 times the original stake. On average, the Kelly strategy gives us around 9.3 times the original stake. It looks like the Kelly strategy performs poorly, but the EV-maximization strategy leaves us with nothing nearly 95 percent of the time, and 1024 times our original stake the remaining 5 percent of the time. The Kelly strategy, on the other hand, leaves us with at least our original stake around 78 percent of the time, never loses everything, and around a quarter of the time gives us over ten times the original stake.
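The ten-throw comparison is easy to reproduce exactly from the binomial distribution; the short sketch below (again with $p=3/4$, so a Kelly fraction of one half) prints the expected multiples and the outcome probabilities quoted above:

```python
from math import comb

p, n = 0.75, 10

# Strategy A: bet the whole stake every time (maximizes EV).
# We end with 2**n times the stake only if all n tosses are heads, otherwise nothing.
ev_all_in = (2 * p) ** n
p_bust_all_in = 1 - p ** n

# Strategy B: Kelly, betting the fraction (2p - 1) = 0.5 of the current stake each time.
# With h heads out of n, the stake is multiplied by 1.5**h * 0.5**(n - h).
kelly = [(comb(n, h) * p**h * (1 - p)**(n - h), 1.5**h * 0.5**(n - h)) for h in range(n + 1)]
ev_kelly = sum(prob * mult for prob, mult in kelly)
p_keep_stake = sum(prob for prob, mult in kelly if mult >= 1)
p_tenfold = sum(prob for prob, mult in kelly if mult >= 10)

print(f"all-in: EV multiple {ev_all_in:.1f}, P(bust) {p_bust_all_in:.1%}")
print(f"Kelly : EV multiple {ev_kelly:.2f}, P(keep >= stake) {p_keep_stake:.1%}, P(>= 10x) {p_tenfold:.1%}")
```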
As the number of wagers increases, the probability of losing everything with the EV-maximization strategy tends to 1, but the Kelly strategy continues to perform well.
This exercise was done as an experiment with 61 finance and economics students and some young finance professionals: they were given a coin that comes up heads 60% of the time (and were told of this bias), an initial stake of \$25, and were allowed up to three hours to win a maximum of \$250. Using a strategy that reduces the risk of losing everything (like the Kelly criterion) makes perfect sense here: there’s lots of time to grind out the maximum win.
In the experiment, around a third lost everything, most bet on tails at some point, and only around one in five won the maximum of \$250. Read the paper here, and coverage in the Economist here.
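To see why a Kelly-style bettor should do well in that setting, here’s a rough Monte Carlo sketch; it assumes a few hundred flips fit in the allotted time and caps bets so the bettor never overshoots \$250 (both assumptions are mine, not the paper’s protocol):

```python
import random

def play_kelly(p=0.6, stake=25.0, cap=250.0, flips=300, seed=None):
    """Bet the Kelly fraction (2p - 1) of the current stake on heads each flip,
    never wagering more than is needed to reach the cap."""
    rng = random.Random(seed)
    for _ in range(flips):
        if stake >= cap:
            break
        bet = min((2 * p - 1) * stake, cap - stake)
        stake += bet if rng.random() < p else -bet
    return stake

results = [play_kelly(seed=i) for i in range(10_000)]
print(f"reached the $250 cap: {sum(r >= 250 for r in results) / len(results):.0%}")
print(f"average final stake : ${sum(results) / len(results):.2f}")
```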
Application to Deal or No Deal
“Deal or No Deal” is a UK game show with a simple premise. There are 22 identical boxes which contain 22 known amounts of money, ranging from 1 penny to £250,000. Although the amounts are known, which box contains which amount is not. The contestant opens up the boxes one at a time, and wins whatever’s in the last box opened. That would be an uninteresting random choice, so between rounds “the banker” rings them up and offers them an amount of money (“the deal”) which they can take and leave the game immediately without opening any more boxes. In the show, there is also some theatrical interest generated by the fact that the boxes are held by friends and family of the contestant.
In an actual episode of the show, a contestant had opened 20 of the boxes, leaving the last two boxes, which given the contents of the previously opened boxes, were known to contain the minimum 1 penny and the maximum £250,000. The banker offered them a deal of £88,000.
The expected value of this situation is £125,000, which is significantly higher than the banker’s offer.
Even though the deal gives up £37,000 in expected value, I think many people would take this offer: a sure £88,000 is better than half a chance at £250,000, and they’d feel awful if they lost. But under what conditions is this rational?
Again, the Kelly criterion can be used to answer this, based on the total amount of money the person playing the game has. The idea of using the total wealth of the player makes sense, because if they have little, then they should prefer to minimize risk and take the sure £88,000. If they have a lot of money already, then they should care less about the risk and play on.
Rounding the 1 penny down to zero to make the calculations a bit shorter, let’s say they have a total wealth of $W$. We want to find the $W$ above which the expectation of the logarithm of their total wealth is higher if they choose to play on.
If they play on, then with probability $1/2$ they will win nothing and their total wealth will remain the same, and with probability $1/2$ they will gain £250,000. Obviously if they choose the deal, they’ll gain £88,000 without any risk.
Upon taking the deal, the logarithm of their wealth will be $\mathrm{log}(W+88,000)$ with certainty. Playing on, the expected logarithm of their wealth will be $\mathrm{log}(W)/2 + \mathrm{log}(W+250,000)/2$. Solving $\mathrm{log}(W+88,000) \lt \mathrm{log}(W)/2 + \mathrm{log}(W+250,000)/2$ (exponentiating both sides turns it into $(W+88,000)^2 \lt W(W+250,000)$, so $W \gt 88,000^2/74,000$) gives (approximately) $W \gt 104,648$.
That’s to say, if the contestant has wealth of £105,000 or more and wants to make a rational decision (according to the Kelly criterion), they should reject the deal and play on. If they have less than this, they should accept the £88,000 and go home.
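If you’d prefer not to do the algebra, the same break-even wealth drops out of a quick numerical search; a small sketch (the function and variable names are mine):

```python
from math import log

def prefers_to_play_on(wealth, deal=88_000, prize=250_000):
    """True if the expected log-wealth from playing on beats taking the sure deal."""
    return 0.5 * log(wealth) + 0.5 * log(wealth + prize) > log(wealth + deal)

# Bisect for the wealth at which the two choices are equally attractive.
lo, hi = 1.0, 1_000_000.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if prefers_to_play_on(mid) else (mid, hi)
print(f"break-even wealth: about £{hi:,.0f}")  # roughly £104,649
```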
In the actual show, the contestant chose “no deal”, and subsequently won 1 penny. If they were already worth more than £105,000 they can console themselves with the fact that they made the right choice. If not, they can console themselves with the fact that most finance and economics students and finance professionals make equally poor choices. Either way, I suppose they weren’t so happy with the outcome.
Car insurance
The previous examples have shown us that the Kelly criterion can be used in probabilistic situations where it’s rational to give up some expected value in order to lower risk. This can also apply to buying insurance.
As a worked example, consider buying first party car insurance for your car. That means you’ll be insured for damage to your own car, up to the cost of replacing the car. In most places, this sort of insurance is optional, whereas third party insurance (damage to other people’s cars) is mandatory. Let’s assume your car is worth \$30,000, and the policy costs \$500. Most car insurance has a deductible, where you have to pay the first \$1,000 or so of any claims, but let’s ignore that for simplicity.
Without actuarial tables or other statistical information, we don’t know how likely it is we’ll claim on the policy at various amounts. But as a gross simplification, let’s assume the insurance company makes 10% (on average) on the policy, and that claims are either for the total value of the car or zero. These assumptions make the best case for us wanting the policy (unless we know that the insurance company isn’t making a profit), so if the Kelly criterion tells us with these assumptions we shouldn’t take the insurance, we can be sure it’s a bad idea.
From these gross assumptions, we have a probability of (500 - 10% of 500)/30,000 = 450/30,000 (or 1.5 percent) of making a claim of \$30,000. Like in the Deal or No Deal example, let’s figure out how much wealth we need to reject the insurance policy.
We want to take the insurance if $(1-p)\mathrm{log}(W) + p\mathrm{log}(W-30,000) < \mathrm{log}(W-500)$, where $W$ is our wealth, and $p$ is our probability of wrecking the car, which we’ve estimated at 1.5 percent.
In this case, solving for $W$ tells us that we should reject the policy if we have more than \$153,193 in wealth.
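There’s no tidy closed form this time, but the threshold falls out of the same kind of numerical search; a sketch with the assumptions above baked in as defaults:

```python
from math import log

def prefers_insurance(wealth, premium=500.0, car=30_000.0, p=0.015):
    """True if the expected log-wealth with the policy beats going uninsured."""
    uninsured = (1 - p) * log(wealth) + p * log(wealth - car)
    return log(wealth - premium) > uninsured

# Bisect for the wealth at which the policy stops being worth it.
lo, hi = 30_001.0, 10_000_000.0  # wealth must exceed the car's value for the logs to exist
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if prefers_insurance(mid) else (lo, mid)
print(f"buy the policy only if wealth is below about ${hi:,.0f}")  # roughly $153,193
```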
If we want more accurate numbers, we can try to model or estimate the probabilities of claims of various sizes, include deductibles, and perhaps better estimate the average profit of the insurance company. For example, if we believe that car insurance is sold at no profit (or even negative profit) to the insurance company, we should buy insurance no matter how much wealth we have.
House insurance
Moving up to larger sums of money, let’s assume we have a house worth \$500,000 (ignoring the cost of the land, since fire will not destroy that), and a total wealth of \$1M. How much should we be willing to pay for building fire insurance? As in the car insurance case, let’s ignore small claims and assume that we either claim for the total destruction of the house or not at all. Let’s also assume this is just insurance for the building and not for the contents, which are insured separately.
We know that such fires are rare. I’ve seen a house burning down, but I know of no one to whom it’s happened. I’ll guess at a probability of one in 50,000 per year, but I suspect that’s more likely to be an overestimate than an underestimate.
Doing the same Kelly sums as before, we want to buy insurance for a cost of $C$ if: $ \mathrm{log}(1,000,000 - C) > p\mathrm{log}(1,000,000 - 500,000) + (1-p)\mathrm{log}(1,000,000)$ where $p$ is our estimated 1/50,000.
This comes out to around \$14 (at which price the insurance company makes about \$4 on average). This was a surprise to me: my intuition was that the amount one should be willing to pay would be much higher, since the value of the house is a significant fraction of total wealth, but because the probability of a total loss is so small, the value of the insurance is also small.
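The same bisection trick finds that break-even premium directly; a sketch under the stated assumptions (house worth \$500,000, total wealth of \$1M, loss probability 1/50,000):

```python
from math import log

def worth_buying(premium, wealth=1_000_000.0, house=500_000.0, p=1 / 50_000):
    """True if paying the premium beats self-insuring against a total loss."""
    uninsured = p * log(wealth - house) + (1 - p) * log(wealth)
    return log(wealth - premium) > uninsured

# Bisect for the highest premium that is still worth paying.
lo, hi = 0.0, 500_000.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if worth_buying(mid) else (lo, mid)
print(f"maximum premium worth paying: about ${hi:,.2f}")  # roughly $13.86
```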
Weaknesses of the Kelly criterion
If you try to use the Kelly criterion to quantify risks that could potentially cost everything you have (for example, if you want to consider how much to pay for liability insurance), then you discover that the methods described in this blog post will say that no price (up to your total wealth) is too much.
That’s not a problem of Kelly, but rather of the simplistic measurement of wealth as the total amount of money you currently have. In reality, losing everything is not usually infinitely bad: if one can work, some money can be regained, and in many places in the world even if you can’t work, the state will support you in some limited way. That’s not to say that losing everything isn’t bad, since it obviously is, but rather that it’s not infinitely bad, as taking the logarithm of the total amount of money you have would judge it.
Conclusion
The Kelly criterion is a conservative approach to taking risks, but when you apply it to real-world insurance scenarios, you’ll mostly find it suggests you shouldn’t take optional insurance if you can bear the cost of a loss, even if it would be painful to do so.
That makes sense, because people are generally overly risk-averse, and insurance companies will price policies at levels the market will bear, so if you’re only as risk-averse as is rational, you’ll find most policies are overpriced.
Disclaimer
This post doesn’t contain financial advice, and I don’t take responsibility if you decide to buy or not buy insurance in any situation and it doesn’t work out for you.